Implicit regularization is an important way to interpret neural networks. Recent theory has begun to explain implicit regularization through the model of deep matrix factorization (DMF) by analyzing the trajectory of discrete gradient dynamics during optimization. These discrete gradient steps are relatively small but not infinitesimal, which matches how neural networks are trained in practice. So far, discrete gradient dynamics analysis has been applied successfully to shallow networks, but for deep networks the required computation becomes prohibitively complex. In this work, we introduce another discrete gradient dynamics approach to explain implicit regularization: landscape analysis, which focuses on the small-gradient regions of the loss surface, such as saddle points and local minima. We theoretically establish the connection between saddle point escaping (SPE) stages and matrix rank in DMF. We prove that, for the reconstruction of a rank-R matrix, DMF converges to a second-order critical point after R stages of SPE. This conclusion is further verified experimentally on a low-rank matrix reconstruction problem. This work provides a new theory for analyzing implicit regularization in deep learning.
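As a minimal sketch of the setting (not the paper's code), the snippet below trains a depth-3 matrix factorization by plain gradient descent on a rank-2 target and periodically prints the effective rank of the end-to-end product; with a small balanced initialization, the rank typically increases one unit at a time as the dynamics escape successive saddles. The dimensions, learning rate, and rank threshold are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: deep matrix factorization trained by gradient descent
# on a rank-2 target; all hyperparameters are assumptions, not the paper's setup.
rng = np.random.default_rng(0)
n, R, depth, lr = 20, 2, 3, 0.01
target = rng.standard_normal((n, R)) @ rng.standard_normal((R, n))  # rank-R target
Ws = [1e-3 * rng.standard_normal((n, n)) for _ in range(depth)]     # small balanced init

def product(factors):
    P = factors[0]
    for W in factors[1:]:
        P = W @ P                       # end-to-end map W_depth ... W_1
    return P

for step in range(100001):
    G = product(Ws) - target            # gradient of 0.5 * ||product - target||_F^2
    grads = []
    for i in range(depth):
        left = product(Ws[i + 1:]) if i + 1 < depth else np.eye(n)
        right = product(Ws[:i]) if i > 0 else np.eye(n)
        grads.append(left.T @ G @ right.T)
    for i in range(depth):
        Ws[i] -= lr * grads[i]          # simultaneous update of all factors
    if step % 20000 == 0:
        s = np.linalg.svd(product(Ws), compute_uv=False)
        eff_rank = int((s > 1e-2 * s[0]).sum())  # crude effective-rank estimate
        print(f"step {step:6d}  loss {0.5 * (G ** 2).sum():.5f}  effective rank {eff_rank}")
```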
In this paper, we propose PanoViT, a panorama vision transformer that estimates the room layout from a single panoramic image. Compared to CNN models, our PanoViT is more proficient at learning global information from the panoramic image for estimating complex room layouts. To account for the differences between perspective and equirectangular images, we design a novel recurrent position embedding and a patch sampling method for processing panoramic images. In addition to extracting global information, PanoViT also includes a frequency-domain edge enhancement module and a 3D loss to extract local geometric features from the panoramic image. Experimental results on several datasets demonstrate that our method outperforms state-of-the-art solutions in room layout prediction accuracy.
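The abstract does not spell out the embedding, but one plausible reading of a "recurrent" position embedding, sketched below under that assumption, is an encoding that is periodic along the horizontal axis, so that the left and right borders of the equirectangular image (which are physically adjacent) receive smoothly continuous positions:

```python
import torch

def pano_position_embedding(num_rows: int, num_cols: int, dim: int) -> torch.Tensor:
    """Hedged sketch of a cyclic ('recurrent') position embedding for panorama patches.

    The horizontal coordinate is encoded with integer harmonics of a phase that
    wraps at the seam (column 0 and column num_cols line up), while the vertical
    coordinate uses an ordinary sinusoidal encoding. This is one interpretation,
    not PanoViT's published design.
    """
    assert dim % 4 == 0
    k = torch.arange(1, dim // 4 + 1).float()                     # integer harmonics
    theta = 2 * torch.pi * torch.arange(num_cols).float() / num_cols
    col = torch.cat([torch.sin(theta[:, None] * k),               # periodic across the seam
                     torch.cos(theta[:, None] * k)], dim=1)       # (num_cols, dim/2)
    y = torch.arange(num_rows).float()[:, None] / num_rows        # no wrap vertically
    row = torch.cat([torch.sin(y * k), torch.cos(y * k)], dim=1)  # (num_rows, dim/2)
    emb = torch.cat([row[:, None, :].expand(num_rows, num_cols, dim // 2),
                     col[None, :, :].expand(num_rows, num_cols, dim // 2)], dim=-1)
    return emb  # added to patch tokens before the transformer encoder

pe = pano_position_embedding(16, 32, 512)  # e.g. a 16x32 grid of panorama patches
```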
Inductive reasoning is a core component of human intelligence. In past computer science research on inductive reasoning, logic languages have been used to represent knowledge (more specifically, facts and rules). However, logic languages cause systematic problems for inductive reasoning, such as the inability to handle raw input like natural language, sensitivity to mislabeled data, and the incapacity to handle ambiguous input. To this end, we propose a new task, inducing natural language rules from natural language facts, and create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language. New automatic metrics are also proposed and analyzed for the evaluation of this task. With DEER, we investigate a modern approach to inductive reasoning in which natural language replaces logic language as the representation of knowledge and pretrained language models serve as "reasoners". Moreover, we provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts. Finally, we propose a new framework for this task, drawing insights from the philosophy literature, which we show in the experiment section surpasses baselines in both automatic and human evaluations.
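As a toy illustration of the "PLM as reasoner" setup (the prompt format, example facts, and model choice are assumptions for demonstration, not the DEER protocol, and a far stronger PLM would be substituted in practice), one can prompt an off-the-shelf generative language model to complete a rule given a list of facts:

```python
from transformers import pipeline

# Hypothetical demonstration of rule induction from natural-language facts.
facts = [
    "Robins build nests in trees and lay eggs in spring.",
    "Sparrows build nests in trees and lay eggs in spring.",
    "Blackbirds build nests in trees and lay eggs in spring.",
]
prompt = ("Facts:\n" + "\n".join(f"- {f}" for f in facts) +
          "\nA general rule that these facts support is: If a bird")

generator = pipeline("text-generation", model="gpt2")  # stand-in reasoner
rule = generator(prompt, max_new_tokens=30, do_sample=False)[0]["generated_text"]
print(rule[len(prompt):])  # the model's induced continuation of the rule
```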
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
The security of artificial intelligence (AI) is an important research area for building safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, the China Industrial Control Systems Cyber Emergency Response Team, the Institute for Artificial Intelligence at Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the rules of these three tracks and the solutions of the top-ranking teams in each track.
Aspect-based sentiment analysis (ABSA) aims to extract opinionated aspect terms from review texts and determine their sentiment polarities, and it is widely studied in both academia and industry. As a fine-grained classification task, its annotation cost is extremely high. Domain adaptation is a popular solution that alleviates the data deficiency in new domains by transferring common knowledge across domains. Most cross-domain ABSA studies are based on structural correspondence learning (SCL) and use pivot features to construct auxiliary tasks that narrow the gap between domains. However, their pivot-based auxiliary tasks can only transfer knowledge of aspect terms, not of sentiment, limiting the performance of existing models. In this work, we propose a novel Syntax-guided Domain Adaptation Model, named SDAM, for more effective cross-domain ABSA. SDAM exploits syntactic structure similarities to build pseudo training instances in which aspect terms of the target domain are explicitly related to sentiment polarities. Besides, we propose a syntax-based BERT masked language model to further capture domain-invariant features. Finally, to alleviate the sentiment inconsistency issue in multi-gram aspect terms, we introduce a span-based joint aspect term and sentiment analysis module into the cross-domain End2End ABSA. Experiments on five benchmark datasets show that our model consistently outperforms state-of-the-art baselines with respect to the Micro-F1 metric for the cross-domain End2End ABSA task.
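The abstract leaves the masking procedure unspecified; the sketch below shows one hedged interpretation of syntax-based masking, where tokens sitting on syntactically informative dependency arcs are masked instead of random ones (the dependency-role set and the use of spaCy are assumptions for illustration, not SDAM's published procedure):

```python
import spacy

# Hypothetical syntax-guided masking: tokens on amod/nsubj/dobj/acomp arcs,
# which tend to carry aspect/sentiment structure, are replaced with [MASK].
nlp = spacy.load("en_core_web_sm")
SYNTACTIC_ROLES = {"amod", "nsubj", "dobj", "acomp"}  # assumed role set

def syntax_guided_mask(sentence: str, mask_token: str = "[MASK]") -> str:
    doc = nlp(sentence)
    return " ".join(mask_token if tok.dep_ in SYNTACTIC_ROLES else tok.text
                    for tok in doc)

print(syntax_guided_mask("The battery life of this laptop is amazing"))
# masks the subject and sentiment-bearing tokens rather than random ones
```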
The non-uniform distribution and extremely sparse nature of LiDAR point clouds (LPC) pose significant challenges for their efficient compression. This paper proposes a novel end-to-end, fully factorized deep framework that encodes the raw LPC into an octree structure and hierarchically decomposes the octree entropy model. The proposed framework exploits hierarchical latent variables as side information to encapsulate sibling and ancestor dependencies, which provides sufficient context for modeling the point cloud distribution while enabling parallel encoding and decoding of the octree nodes in the same layer. In addition, we propose a residual coding framework for compressing the latent variables, which explores the spatial correlation of each layer via progressive downsampling and models the corresponding residuals with a fully factorized entropy model. Furthermore, we propose soft addition and subtraction for residual coding to improve network flexibility. Comprehensive experimental results on the LiDAR benchmark SemanticKITTI and the MPEG-specified dataset Ford demonstrate that our framework achieves state-of-the-art performance among all previous LPC frameworks. Moreover, our end-to-end, fully factorized framework is experimentally shown to be highly parallel and time-efficient, saving more than 99.8% of decoding time compared with the previous state-of-the-art method for LPC compression.
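For readers unfamiliar with the representation, the snippet below is a plain, non-learned sketch of how a point cloud is serialized into the per-level octree occupancy codes that such a hierarchical entropy model would assign probabilities to (normalization to the unit cube and the octant bit order are conventions assumed here):

```python
import numpy as np

def octree_occupancy_codes(points: np.ndarray, depth: int):
    """Serialize points in [0,1)^3 into per-level 8-bit occupancy codes.

    Each non-empty node stores one byte whose bits mark which of its eight
    children contain points; these codes, level by level, are what a learned
    octree entropy model compresses.
    """
    codes_per_level, nodes = [], [points]
    for _ in range(depth):
        next_nodes, codes = [], []
        for pts in nodes:
            octant = (pts >= 0.5).astype(int) @ np.array([1, 2, 4])  # child index 0..7
            code = 0
            for c in range(8):
                sel = pts[octant == c]
                if len(sel):
                    code |= 1 << c                                   # child c occupied
                    corner = np.array([c & 1, (c >> 1) & 1, (c >> 2) & 1]) * 0.5
                    next_nodes.append((sel - corner) * 2.0)          # rescale to child's cube
            codes.append(code)
        codes_per_level.append(codes)
        nodes = next_nodes
    return codes_per_level

pts = np.random.default_rng(0).random((1000, 3))                     # toy point cloud
for level, codes in enumerate(octree_occupancy_codes(pts, depth=4)):
    print(f"level {level}: {len(codes)} nodes")
```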
Few-shot classification requires deep neural networks to learn generalized representations from only a limited number of training images, which is challenging but significant in the low-data regime. Recently, CLIP-based methods have shown promising few-shot performance, benefiting from contrastive language-image pre-training. Based on this point, we question whether large-scale pre-training can alleviate the few-shot data deficiency and also assist representation learning with pre-learned knowledge. In this paper, we propose CoMo, a Collaboration of pre-trained Models, which incorporates diverse prior knowledge from various pre-training paradigms for better few-shot learning. Our CoMo includes: CLIP's language-contrastive knowledge, DINO's vision-contrastive knowledge, and DALL-E's language-generative knowledge. Specifically, CoMo works in two aspects: few-shot data expansion and diverse knowledge ensemble. On one hand, we generate synthetic images via zero-shot DALL-E to enrich the few-shot training data without any manpower. On the other hand, we introduce a learnable Multi-Knowledge Adapter (MK-Adapter) to adaptively blend the predictions from CLIP and DINO. Through such collaboration, CoMo can fully unleash the potential of different pre-training methods and unify them for few-shot classification. We conduct extensive experiments on 11 datasets to demonstrate the superiority and generalization ability of our approach.
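A minimal sketch of the adapter idea follows, assuming precomputed classification logits from the CLIP and DINO branches and a simple learned gate (the actual MK-Adapter design is not specified in the abstract, so the gating scheme here is an assumption):

```python
import torch
import torch.nn as nn

class MultiKnowledgeAdapter(nn.Module):
    """Hedged sketch of an MK-Adapter-style ensemble: adaptively mix predictions
    derived from CLIP (language-contrastive) and DINO (vision-contrastive)
    features with a per-sample learned weight."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, clip_logits, dino_logits, clip_feat):
        w = self.gate(clip_feat)                # per-sample mixing weight in (0, 1)
        return w * clip_logits + (1 - w) * dino_logits

# Usage with assumed precomputed logits: a batch of 4 images, 11 classes.
adapter = MultiKnowledgeAdapter(feat_dim=512)
clip_logits, dino_logits = torch.randn(4, 11), torch.randn(4, 11)
clip_feat = torch.randn(4, 512)
print(adapter(clip_logits, dino_logits, clip_feat).shape)  # torch.Size([4, 11])
```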
HAZOP is a safety paradigm for revealing industrial hazards, and its reports contain valuable hazard events (HAE). The study of HAE classification has many irreplaceable pragmatic values, yet it has received little research attention so far. In this paper, we propose a novel deep learning model called DLF to explore HAE classification via fractal methods from a linguistic perspective. The motivation is that (1) an HAE can naturally be regarded as a kind of time series, and (2) the meaning of an HAE is driven by its word arrangement. Specifically, we first employ BERT to vectorize the HAE. We then propose a new multifractal method called HMF-DFA to compute the HAE fractal series by analyzing the HAE vector, which is regarded as a time series. Finally, we design a new hierarchical gating neural network (HGNN) to process the HAE fractal series and complete the classification of HAEs. We take 18 industrial processes as case studies and launch experiments based on their HAZOP reports. The experimental results show that our DLF classifier is satisfactory and promising, that the proposed HMF-DFA and HGNN are effective, and that introducing linguistic fractality into HAE classification is feasible. Our HAE classification system can serve HAZOP and bring application incentives to experts, engineers, employees, and enterprises, which is conducive to the intelligent development of industrial safety. We hope our research can contribute further support to the daily practice of industrial safety and to fractal theory.
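For concreteness, the snippet below implements standard MF-DFA, the method the proposed HMF-DFA builds on (the "H" variant itself is not reproduced), applied to a stand-in vector treated as a time series:

```python
import numpy as np

def mfdfa(series: np.ndarray, scales, qs, order: int = 1):
    """Compact standard MF-DFA: returns fluctuation functions F_q(s) that
    characterize the multifractal structure of a 1-D series."""
    profile = np.cumsum(series - series.mean())          # integrated profile
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        f2 = []
        for k in range(len(profile) // s):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            f2.append(np.mean((seg - trend) ** 2))       # detrended variance
        f2 = np.asarray(f2)
        for i, q in enumerate(qs):
            if q == 0:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))  # q -> 0 limit
            else:
                Fq[i, j] = np.mean(f2 ** (q / 2)) ** (1 / q)
    return Fq

# E.g. treat one 768-dim sentence vector as a time series (stand-in for BERT output):
vec = np.random.default_rng(0).standard_normal(768)
Fq = mfdfa(vec, scales=[8, 16, 32, 64], qs=[-2, 0, 2])
print(Fq.shape)  # (3, 4): one fluctuation curve per q
```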
HAZOP can expose hazards as textual information, and studying their classification is of great significance for the development of industrial informatics, benefiting safety early warning, decision support, policy evaluation, and so on. However, there is currently no research on this important subject. In this paper, we propose a novel model called DLGM for hazard classification via deep learning. Specifically, first, we leverage BERT to vectorize the hazard and regard it as a kind of time series (HTS). Second, we build a grey model, FSGM(1,1), to model it and obtain its grey guidance in the sense of structural parameters. Finally, we design a hierarchical feature fusion neural network (HFFNN) to investigate the HTS with grey guidance (HTSGG) from three themes, where the HFFNN is a hierarchical structure with four kinds of modules: two feature encoders, a gating mechanism, and a deepening mechanism. We take 18 industrial processes as application cases and launch a series of experiments. The experimental results demonstrate that DLGM has promising aptitude for hazard classification and that FSGM(1,1) and the HFFNN are effective. We hope our research can contribute value and support to the daily practice of industrial safety.
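As background, a classical GM(1,1) grey model, which FSGM(1,1) extends (the "FS" modification is not reproduced here), fits the structural parameters (a, b) that play the role of the "grey guidance" the abstract refers to:

```python
import numpy as np

def gm11(x0: np.ndarray):
    """Fit a classical GM(1,1) grey model and return its structural
    parameters (a, b); FSGM(1,1) is a variant built on this base."""
    x1 = np.cumsum(x0)                                  # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # mean of consecutive sums
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # fits x0[k] = -a*z1[k] + b
    return a, b

def gm11_predict(x0: np.ndarray, a: float, b: float, k: int) -> float:
    # Whitened-equation solution x1_hat(k) = (x0[0] - b/a) * exp(-a*k) + b/a,
    # de-accumulated to recover the original series.
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev

series = np.array([2.87, 3.28, 3.34, 3.52, 3.68])       # toy feature series
a, b = gm11(series)
print(f"a={a:.4f}, b={b:.4f}, next={gm11_predict(series, a, b, len(series)):.3f}")
```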